Seminar 8 (part 1): Analyzing task-based fMRI with motor and emotion tasks
Plan:
Goal. Use task-based fMRI to link functions ↔ brain regions, and to understand how patterns of activation may relate to impairments (relevant for patient studies).
Questions we’ll answer in this seminar.
Which regions are engaged by motor tasks?
What’s the difference between left vs. right finger/hand movement?
Which regions respond during face/emotion processing?
What does good experimental design look like (timing, conditions, QC)?
Dataset. We use the Human Connectome Project (HCP): healthy subjects, large open sample, multiple modalities. In this notebook we work with a preprocessed subset: time series parcellated into 360 Glasser parcels (180 per hemisphere).
Workflow preview. Install dependencies → define constants (subjects, parcels, TR) → load data (rest + tasks) → compute functional connectivity (FC) at rest → compute condition contrasts for MOTOR and EMOTION tasks → visualize hemispheric and network-level effects.
🧭 Data Loading, Preprocessing, and Functional Connectivity
Orientation: Dataset Setup
Human Connectome Project (HCP) dataset loader
The HCP dataset comprises resting-state and task-based fMRI from a large sample of human subjects. The NMA-curated dataset includes time series that have been preprocessed and spatially downsampled by aggregating within 360 regions of interest.
# @title Install dependencies
!pip install nilearn --quiet
import os
import numpy as np
import matplotlib.pyplot as plt
# Necessary for visualization
from nilearn import plotting, datasets
#@title Figure settings
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# The download cells will store the data in nested directories starting here:
HCP_DIR = "./data_seminar_fmri"
if not os.path.isdir(HCP_DIR):
  os.mkdir(HCP_DIR)
# The data shared for NMA projects is a subset of the full HCP dataset
N_SUBJECTS = 339
# The data have already been aggregated into ROIs from the Glasser parcellation
N_PARCELS = 360
# The acquisition parameters for all tasks were identical
TR = 0.72  # Repetition time (time per frame), in seconds
# The parcels are matched across hemispheres and listed in the same order
HEMIS = ["Right", "Left"]
# Each experiment was repeated multiple times in each subject
N_RUNS_REST = 4
N_RUNS_TASK = 2
# Time series data are organized by experiment, with each experiment
# having an LR and RL (phase-encode direction) acquisition
BOLD_NAMES = [
"rfMRI_REST1_LR", "rfMRI_REST1_RL",
"rfMRI_REST2_LR", "rfMRI_REST2_RL",
"tfMRI_MOTOR_RL", "tfMRI_MOTOR_LR",
"tfMRI_WM_RL", "tfMRI_WM_LR",
"tfMRI_EMOTION_RL", "tfMRI_EMOTION_LR",
"tfMRI_GAMBLING_RL", "tfMRI_GAMBLING_LR",
"tfMRI_LANGUAGE_RL", "tfMRI_LANGUAGE_LR",
"tfMRI_RELATIONAL_RL", "tfMRI_RELATIONAL_LR",
"tfMRI_SOCIAL_RL", "tfMRI_SOCIAL_LR"
]
# You may want to limit the subjects used during code development.
# This will use all subjects:
subjects = range(N_SUBJECTS)
Where data live and how they are organized
📂 HCP_DIR — our working folder. All data (rest, tasks, events, parcellation) will be downloaded here.
👥 N_SUBJECTS = 339 — number of participants in our subset of HCP. Loops will iterate over these subjects.
🧩 N_PARCELS = 360 — Glasser atlas parcels (instead of voxels). Each fMRI run → matrix 360 × T.
⏱ TR = 0.72 sec — time per frame. Needed to convert event timings (onset/duration) into frames and to model HRF delay (~5 TR).
🧠 HEMIS = [“Right”, “Left”] — parcel order matches hemispheres: first right, then left. Makes it easy to split contrasts into hemispheres.
🔁 N_RUNS_REST = 4, N_RUNS_TASK = 2 — number of runs for rest and for each task. Helps when concatenating runs.
🎞 BOLD_NAMES = […] — names of all acquisitions. Each task has 2 runs: LR and RL phase-encode directions. Helper functions use this list to find the correct runs.
🔢 subjects = range(N_SUBJECTS) — indices of all participants.
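As a quick check on how TR is used later, here is a minimal sketch (the onset and duration values are made up for illustration) that converts an event's timing in seconds into frame indices, skipping a few frames to account for the ~5-TR hemodynamic delay:

```python
import numpy as np

TR = 0.72   # repetition time in seconds, as defined above
HRF_LAG = 5  # rough hemodynamic delay, in frames (~3.6 s)

# Hypothetical event: starts 12 s into the run and lasts 12 s
onset_s, duration_s = 12.0, 12.0

# Convert seconds to frame indices: floor the onset, ceil the duration
start = int(np.floor(onset_s / TR))
n_frames = int(np.ceil(duration_s / TR))

# Frames covering the event, shifted to account for the HRF lag
frames = np.arange(start, start + n_frames) + HRF_LAG
print(frames[0], frames[-1], len(frames))  # 21 37 17
```

The helper functions later in the notebook (e.g. `condition_frames`) apply the same idea to real EV files.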
Downloading data
The resting‐state and task fMRI data are provided as separate archive files, but once unpacked they share the same directory structure.
Each archive is fairly large and may take some time to download. If your analysis is focused only on resting state or task fMRI, you can choose to download just the corresponding dataset.
We also provide a separate archive with behavioral covariates that may be useful for complementary analyses.
The code below programmatically downloads these archives from the OSF (Open Science Framework). Before downloading, it checks whether the file already exists locally. If it does, the download is skipped; otherwise, the file is fetched and saved into the working directory. As a result, you will have three large archives (rest, task, covariates) and one additional file with the atlas.
# @title Download the data
# @markdown Task data in `HCP_DIR/hcp_task`, rest in `HCP_DIR/hcp_rest`, covariates in `HCP_DIR/hcp_covariates`
import os, requests, tarfile
fnames = ["hcp_rest.tgz",
"hcp_task.tgz",
"hcp_covariates.tgz",
"atlas.npz"]
urls = ["https://osf.io/bqp7m/download",
"https://osf.io/s4h8j/download",
"https://osf.io/x5p4g/download",
"https://osf.io/j5kuc/download"]
for fname, url in zip(fnames, urls):
  if not os.path.isfile(fname):
    try:
      r = requests.get(url)
    except requests.ConnectionError:
      print("!!! Failed to download data !!!")
    else:
      if r.status_code != requests.codes.ok:
        print("!!! Failed to download data !!!")
      else:
        print(f"Downloading {fname}...")
        with open(fname, "wb") as fid:
          fid.write(r.content)
        print(f"Download {fname} completed!")
Extracting the data
In the next step, we unpack the three archives (rest, task, covariates) into the working folder HCP_DIR. The code checks whether the corresponding directories already exist. If they do, extraction is skipped; if not, the archives are unpacked.
After this step, the working directory HCP_DIR contains three subfolders:
hcp_rest – resting-state fMRI,
hcp_task – task-based fMRI,
hcp_covariates – behavioral covariates.
Each of these folders stores the data for individual subjects along with the associated event files and supporting information.
# @title Extract the data in `HCP_DIR`
fnames = ["hcp_covariates", "hcp_rest", "hcp_task"]
for fname in fnames:
  # Skip extraction if the target directory already exists
  path_name = os.path.join(HCP_DIR, fname)
  if not os.path.exists(path_name):
    print(f"Extracting {fname}.tgz...")
    with tarfile.open(f"{fname}.tgz") as fzip:
      fzip.extractall(HCP_DIR)
  else:
    print(f"File {fname}.tgz has already been extracted.")
Extracting hcp_covariates.tgz...
File hcp_rest.tgz has already been extracted.
File hcp_task.tgz has already been extracted.
Loading region information
Downloading either dataset will create the regions.npy file, which contains the region name and network assignment for each parcel.
This file contains:
Region name (e.g., V1, Area 4, MT),
Network assignment (e.g., Visual, Somatomotor, Default Mode).
With this file you can:
Map each row of your 360 × T time-series to a specific anatomical parcel,
Aggregate results not only by ROI but also by larger functional networks.
Detailed information about the names used for each region is provided in the Supplement to Glasser et al., 2016.
Information about the network parcellation is provided in Ji et al., 2019.
dir = os.path.join(HCP_DIR, "hcp_task")  # choose the data directory
regions = np.load(os.path.join(dir, "regions.npy")).T
region_info = dict(name=regions[0].tolist(),
                   network=regions[1],
                   myelin=regions[2].astype(float))
region_info
{'name': ['R_V1',
  'R_MST',
  'R_V6',
  'R_V2',
  ...
  'L_a32pr',
  'L_p24'],
 'network': array(['Visual1', 'Visual2', 'Visual2', ..., 'Cingulo-Oper',
        'Cingulo-Oper', 'Cingulo-Oper'], dtype='<U12'),
 'myelin': array([2.209  , 2.05561, 2.1498 , ..., 1.74335, 1.73082, 1.65968])}

(output truncated: each key holds 360 entries, one per parcel)
Reflection: What is region_info?
region_info is a Python dictionary loaded from regions.npy.
It contains metadata for each of the 360 parcels in the Glasser atlas.
Typically, it has three keys:
‘name’ – list of region names
Example: R_V1, R_MST, R_V6, …
The prefix marks the hemisphere: R_ = right hemisphere, L_ = left hemisphere. For example:
R_V1 → right primary visual cortex
R_4 → right primary motor cortex (Area 4)
R_FEF → right frontal eye fields
‘network’ – functional network assignment for each region
Examples: Visual, Somatomotor, Dorsal Attention, Default Mode, etc.
Useful for summarizing results not only by individual ROI but also by whole networks.
‘myelin’ – numerical values (floats), e.g. 1.72, 1.69, 1.89, …
Represents the myelin map for each region (from HCP).
Higher values → higher cortical myelination (often in sensory and motor areas).
👉 In short: region_info is the lookup table that lets us interpret ROI indices (by name), group them into networks, and optionally use myelin values as extra biological information.
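For instance, region_info lets us look up a parcel's row index by name, or collect all parcels of one network for a network-level summary. A self-contained sketch, using a toy four-parcel stand-in for the real 360-entry dict:

```python
import numpy as np

# Toy stand-in for region_info (the real dict has 360 entries per key)
region_info = dict(
    name=["R_V1", "R_4", "L_V1", "L_4"],
    network=np.array(["Visual1", "Somatomotor", "Visual1", "Somatomotor"]),
    myelin=np.array([2.21, 2.17, 2.16, 2.14]),
)

# Look up the row index of a parcel by its name...
idx = region_info["name"].index("L_4")

# ...and gather the indices of all parcels in one network
somatomotor = np.where(region_info["network"] == "Somatomotor")[0]

print(idx, somatomotor)  # 3 [1 3]
```

The same two operations, applied to the real dict, are what let us pull out specific ROIs or average contrasts within networks later on.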
We also provide the parcellation on the fsaverage5 surface and approximate MNI coordinates of each region, which can be useful for visualization:
Reflection: What is parcellation?
👉 Parcellation means dividing the cortex into distinct regions (parcels),
each assumed to have relatively homogeneous structure and function.
In the Glasser atlas, the cortex is split into 360 parcels (180 per hemisphere).
Each parcel can then be treated as a single “unit” for analysis, instead of working voxel by voxel.
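Because the first 180 parcels are right-hemisphere and the last 180 left-hemisphere (in matched order), any per-parcel statistic can be split by hemisphere with simple slicing. A sketch with random stand-in data:

```python
import numpy as np

N_PARCELS = 360
rng = np.random.default_rng(0)

# Stand-in for a per-parcel contrast (360 values, right hemisphere first)
contrast = rng.standard_normal(N_PARCELS)

right = contrast[:N_PARCELS // 2]  # parcels 0-179: right hemisphere
left = contrast[N_PARCELS // 2:]   # parcels 180-359: left hemisphere

# Parcels are matched across hemispheres, so an elementwise difference
# gives a per-parcel laterality index
laterality = left - right
print(laterality.shape)  # (180,)
```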
with np.load("atlas.npz") as dobj:
  atlas = dict(**dobj)
atlas.keys()
dict_keys(['labels_R', 'labels_L', 'coords'])
Reflection: What is atlas.npz?
The file atlas.npz stores additional resources for visualization.
It complements regions.npy (which contains region names, networks, and myelin values) with surface maps and coordinates.
It contains three keys:
labels_R – parcellation map of the right hemisphere on the fsaverage5 cortical surface (from FreeSurfer).
labels_L – parcellation map of the left hemisphere on the fsaverage5 surface.
coords – approximate MNI coordinates (x, y, z) of each parcel’s centroid.
📌 These data are especially useful for visualization:
You can project contrasts or network values directly onto the cortical surface.
You can also build connectome diagrams in MNI space using the parcel coordinates.
👉 In short: atlas.npz provides the tools to move from raw matrices to interpretable brain images — coloring parcels on the surface or plotting ROI-to-ROI connections in 3D space.
🔧 Helper functions for loading HCP data
Data loading
Before we can analyze the tasks, we need functions that bridge the raw data files with our analysis workflow.
Each subject’s fMRI runs are stored as .npy matrices (360 parcels × timepoints).
Each task also comes with EV files (explanatory variables) describing when each condition happened (onset, duration, amplitude).
Runs are organized with unique IDs in BOLD_NAMES.
To avoid writing repetitive code every time, we define a few helper functions:
get_image_ids(name) – find the run indices (1-based) for a given experiment (“rest”, “MOTOR”, “EMOTION”, etc.).
load_single_timeseries(subject, bold_run, dir) – load one subject’s fMRI time series for a single run.
load_timeseries(subject, name, dir, …) – load all runs of a task for one subject, with options to concatenate them and to remove the parcel-wise mean.
load_evs(subject, name, condition, dir) – load the EV files for a given subject, task, and condition (e.g., “fear”, “neut”, “lf”, “rf”).
👉 Together, these functions let us:
Quickly fetch the correct time series,
Organize data by runs and conditions,
Align BOLD activity with the experimental design.
def get_image_ids(name):
  """Get the 1-based image indices for runs in a given experiment.

  Args:
    name (str) : Name of experiment ("rest" or name of task) to load

  Returns:
    run_ids (list of int) : Numeric IDs for the experiment's image files

  """
  run_ids = [
    i for i, code in enumerate(BOLD_NAMES, 1) if name.upper() in code
  ]
  if not run_ids:
    raise ValueError(f"Found no data for '{name}'")
  return run_ids
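To see what get_image_ids returns, here is the same lookup run against a shortened copy of BOLD_NAMES:

```python
# Shortened copy of the BOLD_NAMES list defined earlier
BOLD_NAMES = [
  "rfMRI_REST1_LR", "rfMRI_REST1_RL",
  "rfMRI_REST2_LR", "rfMRI_REST2_RL",
  "tfMRI_MOTOR_RL", "tfMRI_MOTOR_LR",
  "tfMRI_WM_RL", "tfMRI_WM_LR",
  "tfMRI_EMOTION_RL", "tfMRI_EMOTION_LR",
]

def get_image_ids(name):
  # 1-based indices of runs whose code contains the experiment name
  return [i for i, code in enumerate(BOLD_NAMES, 1)
          if name.upper() in code]

print(get_image_ids("rest"))   # [1, 2, 3, 4]
print(get_image_ids("motor"))  # [5, 6]
```

The 1-based indexing matters because run files on disk are named bold1, bold2, … in the order of BOLD_NAMES.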
def load_timeseries(subject, name, dir,
                    runs=None, concat=True, remove_mean=True):
  """Load timeseries data for a single subject.

  Args:
    subject (int) : 0-based subject ID to load
    name (str) : Name of experiment ("rest" or name of task) to load
    dir (str) : data directory
    runs (None or int or list of ints) : 0-based run(s) of the task to load,
      or None to load all runs.
    concat (bool) : If True, concatenate multiple runs in time
    remove_mean (bool) : If True, subtract the parcel-wise mean

  Returns:
    ts (n_parcel x n_tp array) : Array of BOLD data values

  """
  # Get the list of 0-based run indices to use
  if runs is None:
    runs = range(N_RUNS_REST) if name == "rest" else range(N_RUNS_TASK)
  elif isinstance(runs, int):
    runs = [runs]

  # Get the first (1-based) run id for this experiment
  offset = get_image_ids(name)[0]

  # Load each run's data
  bold_data = [
    load_single_timeseries(subject, offset + run, dir, remove_mean)
    for run in runs
  ]

  # Optionally concatenate in time
  if concat:
    bold_data = np.concatenate(bold_data, axis=-1)

  return bold_data
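The effect of concat=True is simply a join along the time axis. A shape-only sketch with zero-filled stand-in runs (the frame counts here are arbitrary):

```python
import numpy as np

# Two synthetic "runs" with 360 parcels and some number of frames each
run1 = np.zeros((360, 284))
run2 = np.zeros((360, 284))

# concat=True joins runs along the last (time) axis, as in load_timeseries
ts = np.concatenate([run1, run2], axis=-1)
print(ts.shape)  # (360, 568)
```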
def load_single_timeseries(subject, bold_run, dir, remove_mean=True):
  """Load timeseries data for a single subject and single run.

  Args:
    subject (int) : 0-based subject ID to load
    bold_run (int) : 1-based run index, across all tasks
    dir (str) : data directory
    remove_mean (bool) : If True, subtract the parcel-wise mean

  Returns:
    ts (n_parcel x n_timepoint array) : Array of BOLD data values

  """
  bold_path = os.path.join(dir, "subjects", str(subject), "timeseries")
  bold_file = f"bold{bold_run}_Atlas_MSMAll_Glasser360Cortical.npy"
  ts = np.load(os.path.join(bold_path, bold_file))
  if remove_mean:
    ts -= ts.mean(axis=1, keepdims=True)
  return ts
def load_evs(subject, name, condition, dir):
"""Load EV (explanatory variable) data for one task condition.
Args:
subject (int): 0-based subject ID to load
name (str) : Name of task
condition (str) : Name of condition
dir (str) : data directory
Returns
evs (list of dicts): A dictionary with the onset, duration, and amplitude
of the condition for each run.
"""
evs = []
for id in get_image_ids(name):
task_key = BOLD_NAMES[id - 1]
ev_file = os.path.join(dir, "subjects", str(subject), "EVs",
task_key, f"{condition}.txt")
ev_array = np.loadtxt(ev_file, ndmin=2, unpack=True)
ev = dict(zip(["onset", "duration", "amplitude"], ev_array))
evs.append(ev)
return evs
Task-based analysis#
So far, we can load time series for each subject and task, and we can read EV files describing when each condition occurs (e.g., “fear”, “neutral”, “left finger”). But to compare brain activity between conditions, we need to extract only the frames that correspond to each condition and then average them.
To do this, we define two helper functions:
condition_frames(run_evs, skip=0)
Converts onset/duration information into frame indices.
skip lets us ignore a few initial frames of each trial to account for the hemodynamic delay (BOLD signal lags by ~5s).
Output: for each run, a list of frame numbers that belong to the chosen condition.
selective_average(timeseries_data, ev, skip=0)
Uses condition_frames to pick out the relevant frames from the time series.
Then averages the BOLD activity across those frames for each parcel.
Output: a 1D vector (length = number of parcels), representing the mean activation map for that condition.
👉 In short, these functions let us move from:
raw fMRI signals (all frames) + event timings
→ to
condition-specific activation profiles (one vector per condition).
This is the critical step that makes it possible to compute contrasts (e.g., left finger vs. right finger, fear vs. neutral).
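As a concrete illustration (toy numbers, using the dataset's TR of 0.72 s), here is how one trial's onset and duration map to frame indices, and how `skip` drops the first few frames to account for hemodynamic lag:

```python
import numpy as np

TR = 0.72                               # HCP repetition time in seconds
ev = {"onset": np.array([8.0]),         # toy trial: starts at 8 s...
      "duration": np.array([12.0])}     # ...and lasts 12 s

start = np.floor(ev["onset"] / TR).astype(int)        # first frame of the trial
n_frames = np.ceil(ev["duration"] / TR).astype(int)   # frames spanned by the trial
skip = int(np.ceil(5.0 / TR))          # drop ~5 s of frames for the HRF delay

frames = np.concatenate(
    [s + np.arange(skip, d) for s, d in zip(start, n_frames)]
)
print(start, n_frames, skip)   # [11] [17] 7
print(frames)                  # frames 18..27 belong to this trial
```

So a 12 s trial starting at 8 s contributes frames 18 through 27 once the first ~5 s of lag is skipped.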
def condition_frames(run_evs, skip=0):
"""Identify timepoints corresponding to a given condition in each run.
Args:
run_evs (list of dicts) : Onset and duration of the event, per run
skip (int) : Ignore this many frames at the start of each trial, to account
for hemodynamic lag
Returns:
frames_list (list of 1D arrays): Flat arrays of frame indices, per run
"""
frames_list = []
for ev in run_evs:
# Determine when trial starts, rounded down
start = np.floor(ev["onset"] / TR).astype(int)
# Use trial duration to determine how many frames to include for trial
duration = np.ceil(ev["duration"] / TR).astype(int)
# Take the range of frames that correspond to this specific trial
frames = [s + np.arange(skip, d) for s, d in zip(start, duration)]
frames_list.append(np.concatenate(frames))
return frames_list
def selective_average(timeseries_data, ev, skip=0):
"""Take the temporal mean across frames for a given condition.
Args:
timeseries_data (array or list of arrays): n_parcel x n_tp arrays
ev (dict or list of dicts): Condition timing information
skip (int) : Ignore this many frames at the start of each trial, to account
for hemodynamic lag
Returns:
avg_data (1D array): Data averaged across selected image frames based
on condition timing
"""
# Ensure that we have lists of the same length
if not isinstance(timeseries_data, list):
timeseries_data = [timeseries_data]
if not isinstance(ev, list):
ev = [ev]
if len(timeseries_data) != len(ev):
raise ValueError("Length of `timeseries_data` and `ev` must match.")
# Identify the indices of relevant frames
frames = condition_frames(ev, skip)
# Select the frames from each image
selected_data = []
for run_data, run_frames in zip(timeseries_data, frames):
run_frames = run_frames[run_frames < run_data.shape[1]]
selected_data.append(run_data[:, run_frames])
# Take the average in each parcel
avg_data = np.concatenate(selected_data, axis=-1).mean(axis=-1)
return avg_data
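The core idea of `selective_average` can be checked on synthetic data (a toy example, not part of the notebook): frames belonging to a condition are selected by index and averaged per parcel, so a signal boost confined to those frames shows up in the average.

```python
import numpy as np

rng = np.random.default_rng(1)
ts = rng.normal(size=(4, 50))      # toy data: 4 parcels x 50 frames
ts[:, 10:20] += 5.0                # pretend frames 10-19 belong to a condition

frames = np.arange(10, 20)         # what condition_frames would return
avg = ts[:, frames].mean(axis=-1)  # per-parcel mean over condition frames only

print(avg.shape)                   # (4,): one value per parcel
```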
Resting-state analyses#
Load a single run of resting-state data:
help(load_timeseries)
Help on function load_timeseries in module __main__:
load_timeseries(subject, name, dir, runs=None, concat=True, remove_mean=True)
Load timeseries data for a single subject.
Args:
subject (int): 0-based subject ID to load
name (str) : Name of experiment ("rest" or name of task) to load
dir (str) : data directory
runs (None or int or list of ints): 0-based run(s) of the task to load,
or None to load all runs.
concat (bool) : If True, concatenate multiple runs in time
remove_mean (bool) : If True, subtract the parcel-wise mean
Returns
ts (n_parcel x n_tp array): Array of BOLD data values
Prints the docstring of load_timeseries: what arguments it takes (subject, experiment name, data dir, which runs, whether to concatenate, whether to remove parcel-wise mean) and what it returns (n_parcel × n_timepoint array).
get_image_ids(name="rest")
[1, 2, 3, 4]
Looks up 1-based run IDs for the REST experiment in BOLD_NAMES.
REST has four runs (REST1_LR, REST1_RL, REST2_LR, REST2_RL).
get_image_ids(name="LANGUAGE")
[13, 14]
Finds the two LANGUAGE runs (LR/RL) in BOLD_NAMES.
get_image_ids(name="MOTOR")
[5, 6]
Finds the two MOTOR runs (LR/RL) in BOLD_NAMES.
timeseries = load_timeseries(subject=0,
name="rest",
dir=os.path.join("", "hcp_rest"),
runs=1)
print(timeseries.shape) # n_parcel x n_timepoint
(360, 1200)
runs=1 means: use the second REST run (runs are 0-based inside this helper; it adds an offset from get_image_ids under the hood).
Returns a matrix 360 × 1200: 360 Glasser parcels by 1200 timepoints for that run (TR=0.72 s).
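A quick sanity check on those numbers: 1200 timepoints at TR = 0.72 s corresponds to 864 s, i.e. a 14.4-minute run.

```python
n_tp, TR = 1200, 0.72
duration_s = n_tp * TR             # ~864 s per run
duration_min = duration_s / 60.0   # ~14.4 min per run
```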
Load a concatenated resting-state timeseries (using all runs’ data) for each subject:
timeseries_rest = []
for subject in subjects:
ts_concat = load_timeseries(subject, name="rest",
dir=os.path.join("", "hcp_rest"))
timeseries_rest.append(ts_concat)
Default runs=None (and concat=True) → loads all 4 REST runs for each subject and concatenates them along time.
For HCP REST, that typically yields 360 × 4800 per subject (4 × 1200 timepoints).
timeseries_rest becomes a list with one concatenated array per subject (length = number of subjects you set earlier).
Run a simple correlation-based “functional connectivity” analysis#
We transform the resting-state time series into subject-level correlation matrices, average them across subjects, and visualize the group-level functional connectivity (FC) structure of the brain.
Step by step
Build subject-level correlation matrices
For each subject, we take the concatenated resting-state time series (360 × timepoints) and compute the Pearson correlation between every pair of parcels (np.corrcoef(ts)), giving a 360 × 360 matrix per subject. This matrix shows how strongly the activity of two regions co-fluctuates during rest.
Stack across subjects
We store all subject matrices in one array fc of shape (N_SUBJECTS, 360, 360).
Compute the group average
Averaging across subjects (fc.mean(axis=0)) gives a group-level functional connectivity (FC) matrix.
Plot the group FC
Each pixel = mean correlation between two regions across all subjects.
Red = positive correlation (regions co-activate),
Blue = negative correlation (anti-correlated activity),
White ≈ 0 = weak or no relationship.
fc = np.zeros((N_SUBJECTS, N_PARCELS, N_PARCELS))
for sub, ts in enumerate(timeseries_rest):
fc[sub] = np.corrcoef(ts)
group_fc = fc.mean(axis=0)
plt.figure()
plt.imshow(group_fc, interpolation="none", cmap="bwr", vmin=-1, vmax=1)
plt.colorbar()
plt.show()
Reflection: What did we obtain?
The result is a 360 × 360 symmetric matrix, where the diagonal is always 1 (each region perfectly correlates with itself), and the off-diagonal entries show the strength of connections between different parcels.
The dominance of red indicates that most ROI pairs are positively correlated — brain regions tend to co-fluctuate during rest, which is expected in resting-state fMRI.
Clear block-like structures along the diagonal correspond to functional networks (e.g., visual, somatomotor, default mode), where regions within the same network are more strongly correlated.
White areas represent near-zero correlations (weak connections), while blue patches indicate negative correlations (anti-correlations), such as between the DMN and dorsal attention network.
Because this is averaged across all subjects, individual noise is reduced and the stable network organization of the resting brain emerges. These block patterns are the hallmark of resting-state fMRI and underpin the mapping of intrinsic functional networks.
👉 In short: the matrix reveals the brain’s network architecture at rest — strong within-network synchrony, and meaningful anti-correlations between certain networks.
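One refinement worth knowing (not done in this notebook): because Pearson r is bounded at ±1, subject-level correlations are often Fisher z-transformed before averaging and then mapped back, which treats strong correlations less conservatively than a raw mean. A minimal sketch on toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for the subject FC stack: 5 subjects, 8 parcels
fc = np.stack([np.corrcoef(rng.normal(size=(8, 100))) for _ in range(5)])

off = ~np.eye(8, dtype=bool)        # skip the diagonal (arctanh(1) is infinite)
z = np.zeros_like(fc)
z[:, off] = np.arctanh(fc[:, off])  # Fisher r-to-z, per subject

group = np.tanh(z.mean(axis=0))     # average in z-space, map back to r
np.fill_diagonal(group, 1.0)
print(group.shape)                  # (8, 8)
```

For the weak-to-moderate correlations typical of group FC the two averages are close, so the plain mean used above is a reasonable first pass.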
Plot the profile of FC values between a particular “seed” parcel and every parcel in the dataset, separated by hemisphere:
seed_roi = "R_FEF"  # Pick one parcel of interest (R_FEF = right frontal eye field)
ind = region_info["name"].index(seed_roi)  # Find its index in the list of all 360 parcels
# group_fc is 360 x 360; split it into the first 180 parcels (right hemisphere)
# and the last 180 (left hemisphere), so FC can be plotted per hemisphere
hemi_fc = np.split(group_fc, 2)
# Plot the FC profile across the right and left hemisphere target regions
plt.figure()
# For each hemisphere, take the column at the seed index: correlations
# between the seed parcel and all parcels in that hemisphere
for i, hemi in enumerate(hemi_fc):
    plt.plot(hemi[:, ind], label=f"{HEMIS[i]} hemisphere")
plt.title(f"FC for region {seed_roi}")
plt.xlabel("Target region")
plt.ylabel("Correlation (FC)")
plt.legend()
plt.show()
Reflection: 🔎 How to interpret the graph?
X-axis (Target region)
Each tick corresponds to one parcel (ROI) in the brain atlas (0–179 per hemisphere).
It’s essentially the “list of brain regions” we are comparing against the seed.
Y-axis (Correlation / FC)
Shows the strength of functional connectivity (Pearson correlation) between the seed region (R_FEF, the right frontal eye field) and each target parcel.
Values range from 0 up to ~1 here (strong positive correlation).
Two curves
Blue = connectivity profile of R_FEF with all parcels in the right hemisphere.
Orange = connectivity profile of R_FEF with all parcels in the left hemisphere.
Where the blue curve is higher → stronger intra-hemispheric connections.
Where both overlap → symmetric connectivity across hemispheres.
Peaks
Sharp rises in the curve indicate regions that are most strongly coupled with R_FEF.
These likely correspond to other areas involved in eye movement and visuospatial attention (e.g., intraparietal sulcus, supplementary eye field, visual cortex).
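To go beyond eyeballing the peaks, the strongest seed connections can be listed directly with an argsort. This is a sketch with hypothetical names and values; in the notebook you would use `group_fc[:, ind]` and `region_info["name"]` instead:

```python
import numpy as np

# Toy FC profile of a seed against 6 target parcels (hypothetical values)
names = ["A", "B", "C", "D", "E", "F"]
seed_fc = np.array([0.1, 0.8, 0.3, 0.9, 0.2, 0.7])

top = np.argsort(seed_fc)[::-1][:3]  # indices of the 3 strongest targets
print([(names[i], float(seed_fc[i])) for i in top])
# [('D', 0.9), ('B', 0.8), ('F', 0.7)]
```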
The next step turns the dense correlation matrix into a brain network visualization, highlighting only the most robust connections across the cortex.
plotting.view_connectome(group_fc, atlas["coords"], edge_threshold="99%")
Reflection: 🔎 How to interpret the graph?
Input matrix: group_fc (360 × 360), the group-average functional connectivity between all pairs of parcels.
Parcel locations: atlas["coords"] provides the MNI coordinates of each parcel centroid (x, y, z), used to place nodes in brain space.
Thresholding: edge_threshold="99%" keeps only the top 1% strongest connections, removing weaker links to avoid visual clutter.
Plot: plotting.view_connectome (from nilearn) produces an interactive 3D connectome plot.
Nodes = parcels (placed at their MNI coordinates).
Edges = strongest functional connections (lines between nodes).
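The "99%" string is a percentile cut; a close analogue can be reproduced by hand with `np.percentile` on the absolute edge weights (toy matrix below, just to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(2)
mat = rng.normal(size=(20, 20))
mat = (mat + mat.T) / 2              # toy symmetric "FC" matrix
np.fill_diagonal(mat, 0.0)

thr = np.percentile(np.abs(mat), 99)  # cutoff for the top 1% of |edge| values
strong = np.abs(mat) >= thr           # boolean mask of surviving edges
print(int(strong.sum()))              # count includes both matrix triangles
```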
🎭 Task analyses#
So far, we have worked with resting-state data, where participants simply lie still and we study spontaneous correlations between regions.
Now we turn to task-based fMRI, where participants perform specific cognitive or motor tasks inside the scanner.
Each task is designed to probe certain brain functions (e.g., movement, memory, language, social cognition).
The EV files (explanatory variables) provide the timing of each condition: when it starts, how long it lasts, and its amplitude.
By aligning these conditions with the BOLD time series, we can extract brain responses to each experimental manipulation.
The Human Connectome Project (HCP) provides seven task paradigms.
Detailed descriptions of their protocols and timing are available here.
Below is the full list of condition names that appear in the EV files for each task:
- MOTOR: cue, lf, lh, rf, rh, t
- WM:
0bk_body, 0bk_faces, 0bk_nir, 0bk_placed, 0bk_tools,
2bk_body, 2bk_faces, 2bk_nir, 2bk_placed, 2bk_tools,
0bk_cor, 0bk_err,
2bk_cor, 2bk_err,
all_bk_cor, all_bk_err
- EMOTION: fear, neut
- GAMBLING: loss, loss_event, win, win_event, neut_event
- LANGUAGE:
cue,
math, story
present_math, present_story,
question_math, question_story,
response_math, response_story
- RELATIONAL: error, match, relation
- SOCIAL: mental_resp, mental, other_resp, rnd
🧭 Functional difference between left finger motor task vs right finger motor task#
Load individual runs for a MOTOR task#
For each subject, load the two MOTOR runs (LR/RL) separately (concat=False).
timeseries_task = []
for subject in subjects:
timeseries_task.append(load_timeseries(subject, "motor",
dir=os.path.join(HCP_DIR, "hcp_task"),
concat=False))
This gives a list of matrices 360 × T per run.
Run a simple subtraction analysis#
help(load_evs)
Help on function load_evs in module __main__:
load_evs(subject, name, condition, dir)
Load EV (explanatory variable) data for one task condition.
Args:
subject (int): 0-based subject ID to load
name (str) : Name of task
condition (str) : Name of condition
dir (str) : data directory
Returns
evs (list of dicts): A dictionary with the onset, duration, and amplitude
of the condition for each run.
help(selective_average)
Help on function selective_average in module __main__:
selective_average(timeseries_data, ev, skip=0)
Take the temporal mean across frames for a given condition.
Args:
timeseries_data (array or list of arrays): n_parcel x n_tp arrays
ev (dict or list of dicts): Condition timing information
skip (int) : Ignore this many frames at the start of each trial, to account
for hemodynamic lag
Returns:
avg_data (1D array): Data averaged across selected image frames based
on condition timing
Reflection: 🔎 help(load_evs) and help(selective_average) remind us that:
EV files contain onset / duration / amplitude.
selective_average extracts the average signal across only the frames of interest for a condition (with optional skip to account for HRF delay).
task = "motor"
conditions = ["lf", "rf"]  # Run a subtraction analysis between two conditions
contrast = []
for subject in subjects:
# Get the average signal in each region for each condition
evs = [load_evs(subject, task, cond, dir=os.path.join(HCP_DIR, "hcp_task")) for cond in conditions]
avgs = [selective_average(timeseries_task[subject], ev) for ev in evs]
# Store the region-wise difference
contrast.append(avgs[0] - avgs[1])
group_contrast = np.mean(contrast, axis=0)
For each subject, compute average activation per parcel for lf and rf.
Subtract (lf − rf), so positive values mean greater activation for left-hand movement.
Average across subjects to get the group-level contrast vector (360,).
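To judge whether a parcel's contrast is reliable across subjects (a step this notebook stops short of), one can compute a one-sample t-statistic per parcel from the stacked subject contrasts. A numpy-only sketch on toy data standing in for the subject × parcel contrast stack:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy stand-in for the subject x parcel contrast stack (e.g. lf - rf)
contrast = rng.normal(loc=0.5, scale=1.0, size=(30, 360))

n = contrast.shape[0]
mean = contrast.mean(axis=0)
sem = contrast.std(axis=0, ddof=1) / np.sqrt(n)  # standard error of the mean
t = mean / sem                                   # one-sample t vs. 0, per parcel

print(t.shape)  # (360,)
```

In practice you would compare `t` against a t-distribution with n − 1 degrees of freedom (e.g. via scipy.stats) and correct for 360 comparisons.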
Plot group-averaged contrast value across regions:
# Split the group vector into 180 ROIs per hemisphere [right, left], then plot as two lines
hemi_contrasts = np.split(group_contrast, 2)
plt.figure()
for i, hemi_contrast in enumerate(hemi_contrasts):
plt.plot(hemi_contrast, label=f"{HEMIS[i]} hemisphere")
plt.title(f"Contrast of {conditions[0]} - {conditions[1]}")
plt.xlabel("Region")
plt.ylabel('Contrast')
plt.legend()
plt.show()
Reflection: 🔎 Line plot of ROI-wise contrast (lf − rf)
Blue = right hemisphere, orange = left hemisphere.
Positive peaks in blue = right hemisphere regions more active for left-hand movement (expected contralateral control).
Negative dips in orange = left hemisphere regions more active for right-hand movement.
Large fluctuations align with motor/somatosensory areas (Area 4/3b, premotor cortex), sometimes neighboring attention/visual areas due to visual cues.
Plot the regional values on the surface of one hemisphere:
fsaverage = datasets.fetch_surf_fsaverage()
surf_contrast = group_contrast[atlas["labels_L"]]
plotting.view_surf(fsaverage['infl_left'], surf_contrast, vmax=15)
Map parcel values to the labels_L surface indices and visualize on the inflated left hemisphere.
Colors reflect the contrast value in each parcel.
Reflection: 🔎 Surface projection (left hemisphere)
Colors = contrast values per parcel.
In the left hemisphere, motor areas for the hand often show negative values (stronger for right-hand movement, so lf − rf < 0).
The patchy pattern reflects the granularity of the Glasser atlas.
Characterize values by functional network#
Average the contrast values within parcels belonging to each network and plot:
# Get unique network labels
network_names = np.unique(region_info["network"])
hemi_networks = np.split(region_info["network"], 2)
hemi_contrasts = np.split(group_contrast, 2)
plt.figure()
# Get and plot mean contrast value per network, by hemisphere
for hemi, hemi_network, hemi_contrast in zip(HEMIS, hemi_networks, hemi_contrasts):
network_vals = []
for network in network_names:
network_vals.append(hemi_contrast[hemi_network == network].mean())
plt.barh(network_names, network_vals, alpha=.5, label=f"{hemi} hemi")
plt.axvline(0)
plt.xlabel("Amplitude difference")
plt.title(f"Contrast of {conditions[0]} - {conditions[1]}")
plt.legend()
plt.show()
For each network (Somatomotor, Visual, Default, etc.), compute the mean contrast value across its parcels.
Plot as bar graphs, separately for left and right hemisphere.
Reflection: 🔎 Network-level bar plot
Largest differences are in the Somatomotor network:
Right hemisphere shows positive contrast (left-hand movement).
Left hemisphere shows negative contrast (right-hand movement).
Visual / Language / DMN networks show small effects (due to task cues or ancillary processes).
The vertical zero line highlights the direction of effect.
🧭 Find functional differences between fear and neutral stimuli#
Load EMOTION timeseries per run#
Load each subject’s data for a specific task, separately for each run:
timeseries_task = []
for subject in subjects:
timeseries_task.append(load_timeseries(subject, "EMOTION",
dir=os.path.join(HCP_DIR, "hcp_task"),
concat=False))
Output: list of matrices 360 × T per run.
Compute average condition profiles#
We extract average responses for fear and neutral separately:
task = "EMOTION"
conditions = ["fear", "neut"]  # Run a subtraction analysis between two conditions
contrast = []
for subject in subjects:
# Get the average signal in each region for each condition
evs = [load_evs(subject, task, cond, dir=os.path.join(HCP_DIR, "hcp_task")) for cond in conditions]
avgs = [selective_average(timeseries_task[subject], ev) for ev in evs]
# Store the region-wise difference
contrast.append(avgs)
group_contrast = np.mean(contrast, axis=0)
group_contrast.shape
(2, 360)
group_contrast[0] = mean profile for fear across all subjects.
group_contrast[1] = mean profile for neutral.
Line plot of condition responses#
plt.figure()
for i, cond_contrast in enumerate(group_contrast):
plt.plot(cond_contrast, label=f"{conditions[i]}")
plt.title(f"Contrast {conditions[0]} vs {conditions[1]}")
plt.xlabel("Region")
plt.ylabel('Contrast')
plt.legend()
plt.show()
Two curves: Blue = fear, Orange = neutral.
They show per-ROI activation strength for each condition (not a subtraction yet).
Peaks = regions with higher responses in fear or neutral.
Identify most different ROIs#
We compute the absolute difference between conditions for each ROI:
contrast_dict = {}
for i in range(group_contrast.shape[1]):
    region_name = region_info['name'][i]
    diff = np.abs(group_contrast[0][i] - group_contrast[1][i])
    contrast_dict[region_name] = diff
Sort ROIs by difference to find regions that discriminate most strongly.
Typically top hits = visual-temporal areas (e.g., FFC, PIT, LO2).
sorted(contrast_dict.items(), key=lambda item: item[1], reverse=True)
[('R_FFC', np.float64(63.307041121976596)),
('L_FFC', np.float64(60.85087753784342)),
('R_PIT', np.float64(53.724442120454114)),
('L_PIT', np.float64(49.00081073473386)),
('R_LO2', np.float64(30.26129871858639)),
('R_47m', np.float64(30.152449052211765)),
('R_IFSp', np.float64(29.755655331957968)),
('L_IFSp', np.float64(25.32863595913102)),
('L_47m', np.float64(24.965314914053373)),
('L_V8', np.float64(24.84622423456267)),
('R_IFJa', np.float64(23.471535571951897)),
('R_V4t', np.float64(23.20955013757424)),
('R_V1', np.float64(21.979808779619052)),
('R_V8', np.float64(21.423659024410682)),
('L_V1', np.float64(21.300717911241136)),
('R_V3', np.float64(21.11642151704684)),
('L_VMV3', np.float64(20.655487771168723)),
('L_LO2', np.float64(20.41983589101152)),
('R_VMV3', np.float64(19.443549991742664)),
('L_V4t', np.float64(19.412135162429387)),
('L_VVC', np.float64(19.354064327646867)),
('R_TPOJ3', np.float64(18.948037067916253)),
('L_IFJa', np.float64(17.455491888781122)),
('R_VVC', np.float64(17.385673399596673)),
('L_TPOJ3', np.float64(16.93715479028215)),
('L_AIP', np.float64(16.615183163485348)),
('R_V2', np.float64(16.311466634548808)),
('R_IFJp', np.float64(16.280952497294336)),
('R_7m', np.float64(15.510395294140128)),
('R_TE2p', np.float64(15.467063483579999)),
('L_V2', np.float64(15.129375353480182)),
('L_V3', np.float64(14.889397008013301)),
('L_TF', np.float64(14.516072987257761)),
('R_AIP', np.float64(14.294729703401249)),
('L_FST', np.float64(14.121738490596034)),
('L_PFt', np.float64(14.057855072348708)),
('R_TPOJ1', np.float64(13.643748319388019)),
('R_v23ab', np.float64(13.510035148147459)),
('L_LO3', np.float64(13.41465671698501)),
('L_IPS1', np.float64(13.166299473093847)),
('R_FST', np.float64(13.074703531111929)),
('R_TF', np.float64(12.930557577328692)),
('R_PFt', np.float64(12.629902693337502)),
('L_V7', np.float64(11.971921161543799)),
('L_7PL', np.float64(11.929831624681189)),
('R_VMV1', np.float64(11.606378725849856)),
('R_IPS1', np.float64(11.553763658413171)),
('L_MIP', np.float64(11.481591475800576)),
('L_6r', np.float64(11.472149914058576)),
('R_LO3', np.float64(11.459641774083943)),
('L_MST', np.float64(11.232711955457816)),
('R_23d', np.float64(11.045258798106598)),
('L_VMV1', np.float64(10.974384055283235)),
('R_7PL', np.float64(10.92069229329045)),
('L_V3CD', np.float64(10.903855119194217)),
('R_LIPd', np.float64(10.739160286990975)),
('L_p32pr', np.float64(10.279872586344391)),
('R_V7', np.float64(10.26059190093823)),
('L_pOFC', np.float64(10.228660492570771)),
('L_TPOJ1', np.float64(10.21780490114748)),
('L_47l', np.float64(10.065063230760922)),
('L_6a', np.float64(10.002724315317042)),
('L_7m', np.float64(9.889772249314909)),
('R_V3CD', np.float64(9.832246343191118)),
('L_FOP3', np.float64(9.7648827018463)),
('L_7PC', np.float64(9.760108122838318)),
('R_FOP4', np.float64(9.749860269107074)),
('R_STSda', np.float64(9.741534482235883)),
('R_FOP3', np.float64(9.573391828407242)),
('L_46', np.float64(9.548861010388055)),
('L_47s', np.float64(9.511995178715384)),
('R_ProS', np.float64(9.497873095324675)),
('R_45', np.float64(9.42878005762902)),
('L_VIP', np.float64(9.396735426172294)),
('R_FOP2', np.float64(9.334965477642795)),
('L_FOP2', np.float64(9.281714329989542)),
('R_MIP', np.float64(9.257185976517192)),
('R_p32', np.float64(9.003066475422774)),
('R_MST', np.float64(8.99386179731721)),
('L_p24', np.float64(8.963605923003852)),
('L_FOP4', np.float64(8.952245093913316)),
('L_10r', np.float64(8.914689278476665)),
('L_23d', np.float64(8.894401586759038)),
('L_FOP1', np.float64(8.801928826049211)),
('R_TPOJ2', np.float64(8.752715613054244)),
('L_45', np.float64(8.673559820138877)),
('R_STSdp', np.float64(8.65559237994746)),
('R_46', np.float64(8.567512125983425)),
('R_PFop', np.float64(8.550141430839979)),
('R_FOP1', np.float64(8.542552617958128)),
('L_PFop', np.float64(8.517858952713217)),
('R_10v', np.float64(8.490035263999964)),
('L_13l', np.float64(8.442958495617606)),
('L_Pir', np.float64(8.409323014682261)),
('L_6d', np.float64(8.390721848673387)),
('R_p32pr', np.float64(8.217154139188203)),
('L_6v', np.float64(8.148845154423109)),
('R_VIP', np.float64(8.130114629272366)),
('R_6a', np.float64(8.110573555496545)),
('L_PFcm', np.float64(8.021377131065341)),
('L_STSva', np.float64(7.89044969237389)),
('R_MI', np.float64(7.82669304709434)),
('L_a24pr', np.float64(7.810241052078081)),
('R_6r', np.float64(7.752186192409154)),
('R_6ma', np.float64(7.67669167748455)),
('R_p24', np.float64(7.6740956067554205)),
('L_p32', np.float64(7.664049279049532)),
('R_V4', np.float64(7.635292295843)),
('R_7PC', np.float64(7.62193192602286)),
('L_V4', np.float64(7.534959498424728)),
('R_Pir', np.float64(7.522871783320326)),
('R_PF', np.float64(7.483869637077554)),
('L_STSda', np.float64(7.482803395290706)),
('L_6ma', np.float64(7.464423413360889)),
('R_6v', np.float64(7.432346129332018)),
('L_TE2p', np.float64(7.341743782988359)),
('R_PFcm', np.float64(7.324800659545167)),
('R_10r', np.float64(7.212181213489641)),
('L_a32pr', np.float64(7.179088353695423)),
('R_IP1', np.float64(7.13772435872849)),
('R_s6-8', np.float64(7.131493742737108)),
('R_a24pr', np.float64(6.996955709876321)),
('L_OP1', np.float64(6.991716902533337)),
('R_9-46d', np.float64(6.987248483539224)),
('L_SCEF', np.float64(6.977997064733247)),
('L_ProS', np.float64(6.927385812927664)),
('L_LIPv', np.float64(6.919598171441643)),
('L_STSdp', np.float64(6.909594429324999)),
('L_7Am', np.float64(6.753869842455456)),
('R_d32', np.float64(6.752638508783754)),
('R_p24pr', np.float64(6.719046635673784)),
('R_PGi', np.float64(6.710604549088489)),
('L_2', np.float64(6.684046898053467)),
('R_STSva', np.float64(6.581018054911917)),
('R_23c', np.float64(6.503554827491125)),
('L_a24', np.float64(6.460646147479754)),
('L_v23ab', np.float64(6.397062564509532)),
('R_13l', np.float64(6.346543722882329)),
('L_a9-46v', np.float64(6.327688353429575)),
('L_10v', np.float64(6.324992604064775)),
('R_MT', np.float64(6.322163299212216)),
('R_43', np.float64(6.281647043927007)),
('R_pOFC', np.float64(6.26327919393325)),
('L_7AL', np.float64(6.238625106539774)),
('R_a32pr', np.float64(6.188558977081441)),
('L_24dd', np.float64(6.17445846060939)),
('R_47s', np.float64(6.170968008118374)),
('R_a24', np.float64(6.117629579745159)),
('L_V3A', np.float64(6.116690715324441)),
('L_PF', np.float64(6.084203280793433)),
('L_9-46d', np.float64(6.041972101902241)),
('R_47l', np.float64(6.009303595147738)),
('R_7Am', np.float64(5.994727493592427)),
('R_PCV', np.float64(5.982780255251313)),
('L_PHA1', np.float64(5.93451257201729)),
('L_LO1', np.float64(5.906035280252782)),
('L_TPOJ2', np.float64(5.867277933253896)),
('L_PeEc', np.float64(5.808062147113488)),
('R_25', np.float64(5.799825725997608)),
('R_55b', np.float64(5.6817551086261915)),
('R_a10p', np.float64(5.675739138333879)),
('R_V3A', np.float64(5.64004573654605)),
('R_PeEc', np.float64(5.624871837801528)),
('L_PGi', np.float64(5.6231085999646115)),
('R_6d', np.float64(5.612425056544032)),
('R_52', np.float64(5.5100674818838336)),
('R_RSC', np.float64(5.4046648483904995)),
('L_d32', np.float64(5.392314326817344)),
('R_DVT', np.float64(5.380023029891655)),
('R_7AL', np.float64(5.375919589900224)),
('R_PEF', np.float64(5.351257903580267)),
('L_POS2', np.float64(5.309155286309758)),
('L_RSC', np.float64(5.276735391617825)),
('R_24dv', np.float64(5.25957721934145)),
('L_23c', np.float64(5.160993333128655)),
('L_24dv', np.float64(5.112012760326843)),
('L_MI', np.float64(5.0780829474514615)),
('L_POS1', np.float64(5.033194668729664)),
('L_25', np.float64(4.99252156534754)),
('R_IP0', np.float64(4.961341855010209)),
('R_TE1p', np.float64(4.9556770893009645)),
('R_STV', np.float64(4.9059618852535065)),
('L_43', np.float64(4.846414996053035)),
('L_STV', np.float64(4.8076325502799815)),
('R_PHA1', np.float64(4.790320269376121)),
('R_SCEF', np.float64(4.691049162715298)),
('L_6mp', np.float64(4.684753058083328)),
('L_A1', np.float64(4.657406991030115)),
('L_IFJp', np.float64(4.646986722449531)),
('R_a9-46v', np.float64(4.614587366017738)),
('L_31pd', np.float64(4.488305383490817)),
('R_TGv', np.float64(4.4198859229334815)),
('R_PHT', np.float64(4.401023131989943)),
('L_1', np.float64(4.388044455484202)),
('L_PCV', np.float64(4.346810160908245)),
('R_2', np.float64(4.222112799884482)),
('R_V6', np.float64(4.181275696606226)),
('L_V3B', np.float64(4.005725903133477)),
('R_9p', np.float64(3.9906022699528423)),
('L_PH', np.float64(3.9820946333376606)),
('L_5mv', np.float64(3.9657981928235593)),
('R_31pd', np.float64(3.92594133448444)),
('R_PoI2', np.float64(3.8722691523935366)),
('R_RI', np.float64(3.8493183980239647)),
('R_LIPv', np.float64(3.833096857166577)),
('R_A4', np.float64(3.8107757974512166)),
('L_RI', np.float64(3.7732849431313538)),
('R_8C', np.float64(3.756464985085234)),
('L_s6-8', np.float64(3.7550845330571554)),
('L_TE1p', np.float64(3.747944847362682)),
('L_MT', np.float64(3.729437422725678)),
('R_6mp', np.float64(3.7251869959249464)),
('L_PoI2', np.float64(3.7013100432787374)),
('R_TA2', np.float64(3.6837807574294414)),
('L_PHT', np.float64(3.675847688006927)),
('L_LBelt', np.float64(3.5455431240088604)),
('R_PFm', np.float64(3.518607017717024)),
('R_PoI1', np.float64(3.5128270392712397)),
('L_V6A', np.float64(3.5070695670151615)),
('R_5mv', np.float64(3.5045480067827226)),
('L_p24pr', np.float64(3.455649793140929)),
('R_d23ab', np.float64(3.4269878596986656)),
('L_TGd', np.float64(3.392967084050726)),
('L_52', np.float64(3.373367798322961)),
('L_STSvp', np.float64(3.3389103909142257)),
('L_9m', np.float64(3.2948785140195946)),
('L_PreS', np.float64(3.27834710918612)),
('L_7Pm', np.float64(3.2589755618583887)),
('R_OP4', np.float64(3.242817663749041)),
('L_LIPd', np.float64(3.212859465753193)),
('L_V6', np.float64(3.121989275978074)),
('L_i6-8', np.float64(3.112868111421971)),
('L_a47r', np.float64(3.101491397884601)),
('L_OP4', np.float64(3.080651842542446)),
('L_OFC', np.float64(3.05209772740665)),
('L_p9-46v', np.float64(3.011714843019541)),
('L_IP2', np.float64(2.986319145889216)),
('L_a10p', np.float64(2.9646002691738444)),
('R_V6A', np.float64(2.950569684009023)),
('R_9m', np.float64(2.899159106136481)),
('L_IFSa', np.float64(2.885466138131436)),
('R_PSL', np.float64(2.8413143296477346)),
('L_VMV2', np.float64(2.8210928666563317)),
('R_Ig', np.float64(2.79397990072971)),
('R_3a', np.float64(2.7246819967368787)),
('L_p47r', np.float64(2.6986074210390365)),
('R_PHA3', np.float64(2.68898291982508)),
('L_PoI1', np.float64(2.6600235891055535)),
('L_p10p', np.float64(2.6028992295555224)),
('R_TE1a', np.float64(2.5717366779846698)),
('R_PreS', np.float64(2.564485548625068)),
('R_SFL', np.float64(2.5602873717465027)),
('R_24dd', np.float64(2.559361204046577)),
('R_A1', np.float64(2.4848208871449153)),
('R_FEF', np.float64(2.443981467703934)),
('R_POS1', np.float64(2.4083908277448915)),
('R_11l', np.float64(2.401558907922535)),
('R_AAIC', np.float64(2.394745001189725)),
('L_TE1a', np.float64(2.3459206640510213)),
('R_TGd', np.float64(2.3418523860472105)),
('L_10d', np.float64(2.2886438684040686)),
('R_p9-46v', np.float64(2.278759771626291)),
('R_OP1', np.float64(2.2659676405161795)),
('R_p47r', np.float64(2.256344877697532)),
('R_IP2', np.float64(2.1463541180141776)),
('L_3b', np.float64(2.1125319186350366)),
('L_31a', np.float64(2.0848772909369764)),
('L_FOP5', np.float64(2.068718903997949)),
('R_PI', np.float64(2.0617478593620606)),
('L_PHA3', np.float64(2.017434038514711)),
('R_31a', np.float64(1.9945467524297809)),
('L_8Ad', np.float64(1.9475243586985105)),
('L_11l', np.float64(1.9137270647648323)),
('L_9p', np.float64(1.8697637146273922)),
('L_A4', np.float64(1.8299419163355348)),
('L_DVT', np.float64(1.794668041867292)),
('R_IFSa', np.float64(1.762818772238428)),
('R_AVI', np.float64(1.7626705974566683)),
('L_PSL', np.float64(1.7600327085365204)),
('R_p10p', np.float64(1.743221802199646)),
('L_SFL', np.float64(1.7284662951506755)),
('R_PH', np.float64(1.7202837372610014)),
('R_7Pm', np.float64(1.7124172603930412)),
('R_LO1', np.float64(1.6784833552749348)),
('L_s32', np.float64(1.6742010487691488)),
('R_PBelt', np.float64(1.6571911950012974)),
('R_8Av', np.float64(1.6558520206037841)),
('L_H', np.float64(1.6134546561319842)),
('L_5L', np.float64(1.5565854277673894)),
('L_EC', np.float64(1.5385202821688961)),
('L_IP0', np.float64(1.457139961078791)),
('L_55b', np.float64(1.4429835169215544)),
('R_PGs', np.float64(1.4414005329507287)),
('L_PFm', np.float64(1.438088483078286)),
('R_5m', np.float64(1.3988504838884626)),
('L_IP1', np.float64(1.3665733643865798)),
('L_PHA2', np.float64(1.3562841737942324)),
('L_PGs', np.float64(1.339817813794218)),
('R_H', np.float64(1.3331246523405227)),
('L_8C', np.float64(1.2572300623543524)),
('L_8Av', np.float64(1.243398313273467)),
('L_9a', np.float64(1.241273082154121)),
('L_8BM', np.float64(1.206509442802173)),
('R_POS2', np.float64(1.1898590556130446)),
('R_s32', np.float64(1.1702375157710816)),
('L_4', np.float64(1.1565605424445602)),
('R_3b', np.float64(1.1549771005106226)),
('L_33pr', np.float64(1.1506619188625105)),
('L_8BL', np.float64(1.130140647806599)),
('L_Ig', np.float64(1.114492220277715)),
('R_4', np.float64(1.1142935031525376)),
('L_5m', np.float64(1.0960699796598976)),
('R_TE1m', np.float64(1.0881557976487017)),
('L_TA2', np.float64(1.07971531260609)),
('R_OP2-3', np.float64(1.0466015101523465)),
('R_31pv', np.float64(0.981416909477413)),
('R_a47r', np.float64(0.9728977593425953)),
('L_PGp', np.float64(0.934295298475676)),
('L_STGa', np.float64(0.9204470835965881)),
('R_10pp', np.float64(0.905514625574459)),
('R_5L', np.float64(0.8888826699103514)),
('L_TGv', np.float64(0.8730312063196934)),
('R_10d', np.float64(0.866062776245843)),
('L_FEF', np.float64(0.8478287778684495)),
('R_A5', np.float64(0.8221089051281121)),
('L_AVI', np.float64(0.8180375675038385)),
('R_V3B', np.float64(0.8010883407807228)),
('L_PBelt', np.float64(0.7372418814532513)),
('L_10pp', np.float64(0.7317643180982639)),
('L_PI', np.float64(0.714697354839252)),
('R_TE2a', np.float64(0.6847087682794673)),
('L_OP2-3', np.float64(0.6816352883082278)),
('R_44', np.float64(0.6814631809970422)),
('R_OFC', np.float64(0.6474475490734598)),
('L_31pv', np.float64(0.6351160877719046)),
('L_TE2a', np.float64(0.6277364894012698)),
('R_STSvp', np.float64(0.5800644195788995)),
('L_44', np.float64(0.5622593925485824)),
('R_EC', np.float64(0.554593033176205)),
('R_LBelt', np.float64(0.5506168349735283)),
('R_PHA2', np.float64(0.5404407467886612)),
('R_i6-8', np.float64(0.5051827243455816)),
('L_A5', np.float64(0.4569824115224157)),
('R_VMV2', np.float64(0.4360622156307463)),
('R_PGp', np.float64(0.3915069648389138)),
('R_MBelt', np.float64(0.3852454123932839)),
('L_PEF', np.float64(0.3745116699552069)),
('L_TE1m', np.float64(0.35235759900044017)),
('L_3a', np.float64(0.3441698679210667)),
('R_STGa', np.float64(0.31614428981784487)),
('L_MBelt', np.float64(0.28030588216492736)),
('L_d23ab', np.float64(0.2796445344615619)),
('R_8BL', np.float64(0.2742837308246471)),
('L_AAIC', np.float64(0.27013764291321835)),
('R_9a', np.float64(0.22639408268694217)),
('R_1', np.float64(0.19731545177941467)),
('R_8BM', np.float64(0.1591206701675688)),
('R_8Ad', np.float64(0.1061737963204159)),
('R_33pr', np.float64(0.05269033128953271)),
('R_FOP5', np.float64(0.03996554463892732))]
Interpretation: the list is sorted by descending fear − neutral difference, so regions near the top are especially sensitive to emotional face stimuli, while values near zero at the bottom indicate little difference between conditions.
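A ranking like the one above can be produced by sorting parcels by their contrast value. Here is a minimal, self-contained sketch with made-up region names and values (with real data, the values would come from the group contrast computed earlier in the notebook):

```python
import numpy as np

# Hypothetical (fear - neutral) contrast values for a few Glasser parcels;
# real values come from the group-level contrast array.
region_names = ["L_FFC", "R_FFC", "L_V1", "R_PIT", "L_9m"]
contrast = np.array([7.1, 6.8, 1.2, 5.9, 3.3])

# Sort regions by contrast value, largest first
order = np.argsort(contrast)[::-1]
ranking = [(region_names[i], contrast[i]) for i in order]
print(ranking)
```

Reading the result top-down gives the regions most distinct between conditions; the tail gives regions with little fear vs. neutral difference.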
Aggregate by functional networks#
We compute average activation per network for each condition:
# Get unique network labels
network_names = np.unique(region_info["network"])

plt.figure()
# Get and plot mean contrast value per network, by condition
for i, contrast in enumerate(group_contrast):
    network_vals = []
    for network in network_names:
        network_vals.append(contrast[region_info["network"] == network].mean())
    plt.barh(network_names, network_vals, alpha=.5, label=f"{conditions[i]}")
plt.axvline(0)
plt.xlabel("Amplitude difference")
plt.title(f"Contrast of {conditions[0]} - {conditions[1]}")
plt.legend()
plt.show()
Horizontal bars show the mean contrast value within each network.
Visual and attention-related networks show the largest differences, while the Language and default-mode (DMN) networks remain closer to zero.
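The masking-and-averaging pattern used in the cell above can be illustrated with a tiny, self-contained example; the network labels and contrast values here are made up purely for demonstration:

```python
import numpy as np

# Toy stand-ins for the per-parcel network labels and contrast values
networks = np.array(["Visual", "Visual", "Somatomotor", "Default"])
contrast = np.array([2.0, 4.0, 1.0, -0.5])

# np.unique returns the sorted unique labels; a boolean mask then
# selects the parcels belonging to each network before averaging
network_names = np.unique(networks)
network_vals = [contrast[networks == net].mean() for net in network_names]
print(dict(zip(network_names, network_vals)))
```

Note that the boolean comparison `networks == net` requires `networks` to be a NumPy array (as `region_info["network"]` is in this notebook); with a plain Python list the mask would not be computed element-wise.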
Reflection: 🔎 What the plots show
Line plots: Blue (fear) vs. orange (neutral). Where blue > orange, the fear condition evokes a stronger response. Peaks usually appear in occipital/temporal ROIs (FFA / PIT / FFC).
ROI ranking: Sorted list of regions by |fear − neutral| highlights those most distinct between conditions.
Network-level bars: Visual and dorsal-attention networks show strong differences; other networks (e.g., DMN, Language) have weaker or near-zero effects.
Putting It Together: What the Results Mean#
Resting state. Reveals intrinsic network architecture (strong within-network FC, known anti-correlations).
Motor task. Clear contralateral somatomotor activation (left movement → right hemisphere; right movement → left hemisphere).
Emotion task. Stronger responses in visual/ventral temporal regions to faces, with fear vs neutral differences concentrated in face/visual pathways and attention systems.
Networks & atlas. Parcel-level findings are interpretable via network labels (Somatomotor, Visual, DMN, etc.); myelin values provide biological context but are optional here.
Next Steps#
Move from raw activation/contrasts to features: parcel means, functional connectivity (task or rest), network summaries, graph measures.
Use these features for machine learning (e.g., classification, regression) to study disorders or behavioral phenotypes; evaluate feature importance and model interpretability.
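As a first step toward the feature-based analyses mentioned above, per-subject functional connectivity can be vectorized into a feature matrix (one row per subject, one column per parcel pair). The sketch below uses random synthetic time series in place of the real 360-parcel HCP data, so the shapes are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-subject parcel time series
# (subjects x timepoints x parcels); real data would be the HCP time series.
n_subjects, n_time, n_parcels = 20, 100, 10
ts = rng.standard_normal((n_subjects, n_time, n_parcels))

# Functional connectivity per subject: parcel-by-parcel correlation matrix,
# vectorized as its upper triangle (one feature per unique parcel pair)
iu = np.triu_indices(n_parcels, k=1)
features = np.array([np.corrcoef(ts[s].T)[iu] for s in range(n_subjects)])
print(features.shape)  # one row per subject, n_parcels*(n_parcels-1)/2 columns
```

The resulting matrix can be passed directly to standard estimators (e.g., scikit-learn classifiers) for subject-level prediction.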